
    To screen, or not to screen: An experimental comparison of two methods for correlating video game loot box expenditure and problem gambling severity

    Loot boxes are gambling-like products found in video games that players can buy with real-world money to obtain random rewards. A positive correlation between loot box spending and problem gambling severity has been well replicated. Some researchers recently argued that this observed positive correlation may be due to participants incorrectly interpreting problem gambling questions as applying to their loot box expenditure, because they see loot box purchasing as a form of ‘gambling.’ We experimentally tested this alternative explanation for the observed positive correlation (N = 2027) by manipulating whether all participants were given the problem gambling scale, as the previous literature generally had done (the ‘non-screening’ approach; n = 1005), or whether participants were ‘screened’ (n = 1022) so that the problem gambling scale was given only to those reporting recent gambling expenditure. Through the latter screening process, we clarified and calibrated what ‘gambling’ means by providing an exhaustive list of activities that should be accounted for and by specifically instructing participants that loot box purchasing is not to be considered a form of ‘gambling.’ Results showed positive correlations between loot box spending and problem gambling across both experimental conditions. In addition, a predicted positive correlation emerged between binary past-year gambling participation and loot box expenditure in the screening group. These experimental results confirm that the association between loot box spending and problem gambling severity is likely not due to participants misinterpreting problem gambling questions as being relevant to their loot box spending. However, problem gambling severity was inflated in the non-screening group, meaning that future research on gambling-like products should include gambling participation screening questions; better define what ‘gambling’ means; potentially exclude non-gamblers from analysis; and, importantly, provide explicit instructions on whether or not certain activities should be considered a form of ‘gambling.’
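    A minimal sketch of the two questionnaire flows described above, assuming a simple tabular dataset; the column names and the choice of a Spearman correlation are illustrative assumptions, not the authors' analysis code.

```python
# Illustrative sketch only: column names and the correlation method are assumptions.
import pandas as pd
from scipy.stats import spearmanr

def analyse_arm(df: pd.DataFrame, screening: bool):
    """Correlate loot box spending with problem gambling severity within one arm.

    Non-screening arm: every participant answers the problem gambling scale.
    Screening arm: only participants reporting past-year gambling expenditure
    answer it, so non-gamblers drop out of the correlation.
    """
    if screening:
        df = df[df["gambling_spend"] > 0]
    return spearmanr(df["loot_box_spend"], df["pgsi_score"])
```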

    A time-predefined approach to course timetabling

    A common weakness of local search metaheuristics, such as Simulated Annealing, in solving combinatorial optimization problems is the need to set a number of parameters. This tends to generate a significant increase in the total amount of time required to solve the problem and often requires a high level of experience from the user. This paper is motivated by the goal of overcoming this drawback by employing "parameter-free" techniques in the context of automatically solving course timetabling problems. We employ local search techniques with "straightforward" parameters, i.e. ones that an inexperienced user can easily understand. In particular, we present an extended variant of the "Great Deluge" algorithm, which requires only two parameters (which can be interpreted as the search time and an estimate of the required level of solution quality). These parameters affect the performance of the algorithm so that a longer search provides a better result - as long as we can intelligently stop the approach from converging too early. Hence, a user can choose a balance between processing time and the quality of the solution. The proposed method has been tested on a range of university course timetabling problems and the results were evaluated within an International Timetabling Competition. The effectiveness of the proposed technique is confirmed by the high quality of the results, which corresponded to the third-best overall average ranking among the 21 participants and the best solutions on 8 of the 23 test problems.
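    A minimal sketch of the time-predefined acceptance rule described above, assuming a linearly falling boundary between the initial cost and the desired quality; the `cost` and `random_neighbour` functions stand in for a problem-specific timetabling implementation and are not part of the paper.

```python
import copy
import time

def great_deluge(initial, cost, random_neighbour, search_seconds, desired_cost):
    """Accept a neighbour if it improves on the current solution or lies below a
    boundary that falls linearly from the initial cost to `desired_cost` over
    `search_seconds` of wall-clock time (the two user-facing parameters)."""
    current = best = copy.deepcopy(initial)
    current_cost = best_cost = initial_cost = cost(current)
    start = time.monotonic()
    while (elapsed := time.monotonic() - start) < search_seconds:
        # "Water level": interpolates from the starting cost down to the target quality.
        boundary = initial_cost - (initial_cost - desired_cost) * elapsed / search_seconds
        candidate = random_neighbour(current)
        candidate_cost = cost(candidate)
        if candidate_cost <= current_cost or candidate_cost <= boundary:
            current, current_cost = candidate, candidate_cost
            if current_cost < best_cost:
                best, best_cost = copy.deepcopy(current), current_cost
    return best, best_cost
```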

    Dealing with Time in Health Economic Evaluation: Methodological Issues and Recommendations for Practice

    Time is an important aspect of health economic evaluation, as the timing and duration of clinical events, healthcare interventions and their consequences all affect estimated costs and effects. These issues should be reflected in the design of health economic models. This article considers three important aspects of time in modelling: (1) which cohorts to simulate and how far into the future to extend the analysis; (2) the simulation of time, including the difference between discrete-time and continuous-time models, cycle lengths, and converting rates and probabilities; and (3) discounting future costs and effects to their present values. We provide a methodological overview of these issues and make recommendations to help inform both the conduct of cost-effectiveness analyses and the interpretation of their results. For choosing which cohorts to simulate and how many, we suggest analysts carefully assess potential reasons for variation in cost effectiveness between cohorts and the feasibility of subgroup-specific recommendations. For the simulation of time, we recommend using short cycles or continuous-time models to avoid biases and the need for half-cycle corrections, and provide advice on the correct conversion of transition probabilities in state transition models. Finally, for discounting, analysts should not only follow current guidance and report how discounting was conducted, especially in the case of differential discounting, but also seek to develop an understanding of its rationale. Our overall recommendations are that analysts explicitly state and justify their modelling choices regarding time and consider how alternative choices may impact the results.
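    A minimal numerical sketch of two of the conversions discussed above (re-expressing transition probabilities for a different cycle length under a constant-rate assumption, and discounting future values); the function names and the 3.5% discount rate are illustrative assumptions, not prescriptions.

```python
import math

def rate_to_probability(rate_per_year: float, cycle_years: float) -> float:
    """Probability of an event within one cycle, assuming a constant underlying rate."""
    return 1.0 - math.exp(-rate_per_year * cycle_years)

def rescale_probability(p: float, from_years: float, to_years: float) -> float:
    """Re-express a transition probability for a different cycle length.
    Only valid under a constant rate and a single (non-competing) transition."""
    return 1.0 - (1.0 - p) ** (to_years / from_years)

def present_value(amount: float, years_in_future: float, discount_rate: float = 0.035) -> float:
    """Discount a future cost or effect to its present value."""
    return amount / (1.0 + discount_rate) ** years_in_future

# A 12-month transition probability of 0.20 corresponds to roughly 0.054 per 3-month cycle...
print(rescale_probability(0.20, from_years=1.0, to_years=0.25))
# ...and a cost of 1,000 incurred in 10 years is worth about 709 today at 3.5% per year.
print(present_value(1000.0, years_in_future=10))
```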

    How does the phrasing of house edge information affect gamblers’ perceptions and level of understanding? A registered report

    The provision of information to consumers is a common input to tackling various public health issues. By comparison to the information given on food and alcohol products, information on gambling products is either not given at all, or shown in low-prominence locations in a suboptimal format, e.g. the ‘return-to-player’ format, ‘this game has an average percentage payout of 90%’. Some previous research suggests that it would be advantageous to communicate this information via the ‘house edge’ format instead: the average loss from a given gambling product, e.g. ‘this game keeps 10% of all money bet on average’. However, previous empirical work on the house edge format uses only this specific phrasing, and there may be better ways of communicating house edge information. The present work experimentally tested this original phrasing of the house edge against an alternative phrasing that has also been proposed, ‘on average this game is programmed to cost you 10% of your stake on each bet’, while both phrasings were also compared against equivalent return-to-player information (N = 3333 UK-based online gamblers). The two dependent measures were gamblers’ perceived chances of winning and a measure of participants’ correct understanding. Preregistered Stage 1 protocol: https://osf.io/5npy9 (date of in-principle acceptance: 28/11/2022). The alternative house edge phrasing resulted in the lowest perceived chances of winning, but the original phrasing had the highest rate of correct understanding. Compared to return-to-player information, the original phrasing had both lower perceived chances of winning and higher rates of correct understanding, while the alternative phrasing had only lower perceived chances of winning. These results replicated prior work on the advantages of the original house edge phrasing over return-to-player information, while showing that the alternative house edge phrasing has advantageous properties for gamblers’ perceived chances of winning only. The optimal communication of risk information can act as an input to a public health approach to reducing gambling-related harm.
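    A small sketch of the arithmetic relating the formats compared above; the wording templates simply mirror the example statements quoted in the abstract, and the helper function is hypothetical rather than any official specification.

```python
# Illustrative only: relates a return-to-player percentage to the two house edge phrasings.
def risk_statements(return_to_player_pct: float) -> dict:
    house_edge_pct = 100.0 - return_to_player_pct  # house edge is the complement of return-to-player
    return {
        "return_to_player": f"This game has an average percentage payout of {return_to_player_pct:g}%",
        "house_edge_original": f"This game keeps {house_edge_pct:g}% of all money bet on average",
        "house_edge_alternative": (
            f"On average this game is programmed to cost you {house_edge_pct:g}% "
            "of your stake on each bet"
        ),
    }

print(risk_statements(90.0)["house_edge_original"])  # -> "This game keeps 10% of all money bet on average"
```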

    High fidelity simulation of the endoscopic transsphenoidal approach: Validation of the UpSurgeOn TNS Box

    Objective: Endoscopic endonasal transsphenoidal surgery is an established technique for the resection of sellar and suprasellar lesions. The approach is technically challenging and has a steep learning curve. Simulation is a growing training tool, allowing the acquisition of technical skills pre-clinically and potentially resulting in a shorter clinical learning curve. We sought validation of the UpSurgeOn Transsphenoidal (TNS) Box for the endoscopic endonasal transsphenoidal approach to the pituitary fossa. Methods: Novice, intermediate and expert neurosurgeons were recruited from multiple centres. Participants were asked to perform a sphenoidotomy using the TNS model. Face and content validity were evaluated using a post-task questionnaire. Construct validity was assessed through post-hoc blinded scoring of operative videos using a Modified Objective Structured Assessment of Technical Skills (mOSAT) and a Task-Specific Technical Skill scoring system. Results: Fifteen participants were recruited, of whom n = 10 (66.6%) were novices and n = 5 (33.3%) were intermediate and expert neurosurgeons. Three of the intermediate and expert participants (60%) agreed that the model was realistic. All intermediate and expert participants (n = 5) strongly agreed or agreed that the TNS model was useful for teaching the endonasal transsphenoidal approach to the pituitary fossa. The consensus-derived mOSAT score was 16/30 (IQR 14–16.75) for novices and 29/30 (IQR 27–29) for intermediate and expert participants (p < 0.001, Mann–Whitney U). The median Task-Specific Technical Skill score was 10/20 (IQR 8.25–13) for novices and 18/20 (IQR 17.75–19) for intermediate and expert participants (p < 0.001, Mann–Whitney U). Interrater reliability was 0.949 (CI 0.853–0.983) for the mOSAT and 0.945 (CI 0.842–0.981) for the Task-Specific Technical Skill score. Suggested improvements for the model included the addition of neurovascular anatomy and arachnoid mater to simulate bleeding vessels and CSF leak, respectively, as well as improved materials to bring the consistency closer to that of human tissue and bone. Conclusion: The TNS Box simulation model has demonstrated face, content, and construct validity as a simulator for the endoscopic endonasal transsphenoidal approach. Given the steep learning curve associated with endoscopic approaches, this simulation model has potential as a valuable training tool in neurosurgery, with further improvements including more advanced simulation materials, dynamic models (e.g., with blood flow) and synergy with complementary technologies (e.g., artificial intelligence and augmented reality).
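    A minimal sketch, not the study's analysis code, of the construct validity comparison described above: a Mann–Whitney U test on blinded mOSAT scores between novices and intermediate/expert participants. The score lists are placeholder values for illustration only.

```python
from scipy.stats import mannwhitneyu

novice_mosat = [14, 15, 16, 16, 17, 14, 13, 16, 17, 15]  # n = 10, hypothetical scores out of 30
expert_mosat = [27, 28, 29, 29, 30]                       # n = 5, hypothetical scores out of 30

# Two-sided test of whether score distributions differ between the groups.
u_stat, p_value = mannwhitneyu(novice_mosat, expert_mosat, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.4f}")  # a small p suggests the model separates skill levels
```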

    Creative destruction in science

    Drawing on the concept of a gale of creative destruction in a capitalistic economy, we argue that initiatives to assess the robustness of findings in the organizational literature should aim to simultaneously test competing ideas operating in the same theoretical space. In other words, replication efforts should seek not just to support or question the original findings, but also to replace them with revised, stronger theories with greater explanatory power. Achieving this will typically require adding new measures, conditions, and subject populations to research designs, in order to carry out conceptual tests of multiple theories in addition to directly replicating the original findings. To illustrate the value of the creative destruction approach for theory pruning in organizational scholarship, we describe recent replication initiatives re-examining culture and work morality, working parents’ reasoning about day care options, and gender discrimination in hiring decisions. Significance statement: It is becoming increasingly clear that many, if not most, published research findings across scientific fields are not readily replicable when the same method is repeated. Although extremely valuable, failed replications risk leaving a theoretical void: reducing confidence that the original theoretical prediction is true, but not replacing it with positive evidence in favor of an alternative theory. We introduce the creative destruction approach to replication, which combines theory pruning methods from the field of management with emerging best practices from the open science movement, with the aim of making replications as generative as possible. In effect, we advocate for a Replication 2.0 movement in which the goal shifts from checking on the reliability of past findings to actively engaging in competitive theory testing and theory building. Scientific transparency statement: The materials, code, and data for this article are posted publicly on the Open Science Framework, with links provided in the article.

    Genomic investigations of unexplained acute hepatitis in children

    Since its first identification in Scotland, over 1,000 cases of unexplained acute hepatitis in children have been reported worldwide, including 278 cases in the UK [1]. Here we report an investigation of 38 cases, 66 age-matched immunocompetent controls and 21 immunocompromised comparator participants, using a combination of genomic, transcriptomic, proteomic and immunohistochemical methods. We detected high levels of adeno-associated virus 2 (AAV2) DNA in the liver, blood, plasma or stool from 27 of 28 cases. We found low levels of adenovirus (HAdV) and human herpesvirus 6B (HHV-6B) in 23 of 31 and 16 of 23 of the cases tested, respectively. By contrast, AAV2 was infrequently detected, and at low titre, in the blood or liver of control children with HAdV, even when profoundly immunosuppressed. AAV2, HAdV and HHV-6 phylogeny excluded the emergence of novel strains in cases. Histological analyses of explanted livers showed enrichment for T cells and B lineage cells. Proteomic comparison of liver tissue from cases and healthy controls identified increased expression of HLA class II, immunoglobulin variable regions and complement proteins. HAdV and AAV2 proteins were not detected in the livers. Instead, we identified AAV2 DNA complexes reflecting both HAdV-mediated and HHV-6B-mediated replication. We hypothesize that high levels of abnormal AAV2 replication products, aided by HAdV and, in severe cases, HHV-6B, may have triggered immune-mediated hepatic disease in genetically and immunologically predisposed children.